1.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in High-Performance Compute clusters, these algorithms have been shown to scale in performance when developed to run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gain, but also toward qualitative benefits in the learnt parameters. The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution. This allows the new ABC-SMC algorithm to explore the state space of the parameters being learned more efficiently. We test the effectiveness of the proposed algorithm by learning parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we get similar quality results in $\sim 100\times$ fewer simulations and observe $\sim 80\times$ lower run-to-run variance across 10 independent trials.
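
The following is a minimal, CPU-only sketch of the core idea described in this abstract, not the authors' GPU implementation: in an ABC-SMC perturbation step, each particle draws its own step-size from a tuned Beta distribution rather than using a single global hyperparameter. The function name, Beta shape parameters, and Gaussian perturbation kernel are all illustrative assumptions.

# Sketch: Beta-distributed step-sizes in an ABC-SMC perturbation kernel.
# All names and parameter values are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def perturb_particles(particles, alpha=2.0, beta=5.0, scale=1.0):
    """Perturb each ABC-SMC particle with its own Beta-sampled step-size.

    particles : (n_particles, n_params) array of current parameter samples.
    alpha, beta : shape parameters of the tuned Beta distribution (assumed).
    scale : maps the Beta sample from [0, 1] to the parameter space's units.
    """
    n_particles, n_params = particles.shape
    # One step-size per particle instead of a single global hyperparameter.
    step_sizes = scale * rng.beta(alpha, beta, size=(n_particles, 1))
    # Gaussian perturbation whose spread is governed by the sampled step-size.
    return particles + step_sizes * rng.standard_normal((n_particles, n_params))

# Usage: perturb a population of 1,000 particles over 3 model parameters.
population = rng.uniform(0.0, 1.0, size=(1000, 3))
proposed = perturb_particles(population)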

2.
IEEE Transactions on Information Theory ; 68(4):2604-2621, 2022.
Article in English | Web of Science | ID: covidwho-1769667

ABSTRACT

The group testing problem is concerned with identifying a small set of infected individuals in a large population. At our disposal is a testing procedure that allows us to test several individuals together. In an idealized setting, a test is positive if and only if at least one infected individual is included, and negative otherwise. Significant progress was made in recent years towards understanding the information-theoretic and algorithmic properties of this noiseless setting. In this paper, we consider a noisy variant of group testing where test results are flipped with a certain probability, including the realistic scenario where sensitivity and specificity can take arbitrary values. Using a test design where each individual is assigned to a fixed number of tests, we derive explicit algorithmic bounds for two commonly considered inference algorithms and thereby naturally extend the results of Scarlett & Cevher (2016) and Scarlett & Johnson (2020). We provide improved performance guarantees for the efficient algorithms in these noisy group testing models; indeed, for a large set of parameter choices the bounds provided in the paper are the strongest currently proved.
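
The short sketch below simulates the noisy group-testing model this abstract describes, not the paper's inference algorithms or bounds: each individual joins a fixed number of pooled tests, and each test outcome is corrupted according to a sensitivity and specificity. All parameter values and function names are assumptions for illustration.

# Sketch: simulating noisy pooled tests with sensitivity/specificity flips.
# Parameter values and names are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)

def simulate_noisy_tests(n=1000, k=20, tests=200, tests_per_item=5,
                         sensitivity=0.95, specificity=0.98):
    """Return (infected, assignment, outcomes) for one random pooled design."""
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, size=k, replace=False)] = True

    # Constant-column design: each individual joins exactly `tests_per_item` tests.
    assignment = np.zeros((tests, n), dtype=bool)
    for i in range(n):
        assignment[rng.choice(tests, size=tests_per_item, replace=False), i] = True

    # Noiseless outcome: a pool is positive iff it contains an infected individual.
    contains_infected = (assignment.astype(np.int64) @ infected.astype(np.int64)) > 0

    # Noisy outcome: truly positive pools read positive with prob. `sensitivity`,
    # truly negative pools read negative with prob. `specificity`.
    observed_if_positive = rng.random(tests) < sensitivity
    observed_if_negative = rng.random(tests) < specificity
    outcomes = np.where(contains_infected, observed_if_positive, ~observed_if_negative)
    return infected, assignment, outcomes

infected, A, y = simulate_noisy_tests()
print(f"{infected.sum()} infected, {y.sum()} positive tests out of {len(y)}")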
